Cough audio signal classification is a potentially useful tool for screening respiratory disorders such as COVID-19. Since collecting data from patients with such contagious diseases is dangerous, many research teams have turned to crowdsourcing to quickly gather cough sound data. The COUGHVID dataset enlisted expert physicians to diagnose the underlying diseases present in a limited number of uploaded recordings. However, this approach suffers from potential mislabeled coughs, as well as notable disagreement between the experts. In this work, we use a semi-supervised learning (SSL) approach to improve the labeling consistency of the COUGHVID dataset and the robustness of COVID-19 versus healthy cough sound classification. First, we leverage existing SSL expert knowledge aggregation techniques to overcome the labeling inconsistency and sparsity in the dataset. Next, our SSL approach is used to identify a subsample of re-labeled cough audio samples that can be used to train or augment future cough classification models. The consistency of the re-labeled data is shown by its high degree of class separability, which is 3x higher than that of the user-labeled data, despite the expert label inconsistency present in the original dataset. Furthermore, the spectral differences in the user-labeled audio segments are amplified in the re-labeled data, resulting in significantly different power spectral densities between healthy and COVID-19 coughs, which demonstrates both the increased consistency of the new dataset and its explainability from an acoustic perspective. Finally, we demonstrate how the re-labeled dataset can be used to train a cough classifier. This SSL approach can be used to combine the medical knowledge of several experts to improve the database consistency of any diagnostic classification task.
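A minimal sketch of the confidence-filtered relabeling idea described above, under strong simplifying assumptions: a nearest-centroid model stands in for the paper's expert-knowledge aggregation, and the function name and threshold are illustrative, not from the paper.

```python
import numpy as np

def relabel_by_confidence(X_lab, y_lab, X_unlab, threshold=0.8):
    """Toy self-training pass: fit per-class centroids on the
    expert-labeled subset, then keep only unlabeled samples whose
    soft assignment to the nearest centroid exceeds `threshold`."""
    classes = np.unique(y_lab)
    centroids = np.stack([X_lab[y_lab == c].mean(axis=0) for c in classes])
    # Distance to each centroid -> softmax-style confidence per sample.
    d = np.linalg.norm(X_unlab[:, None, :] - centroids[None, :, :], axis=2)
    p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    conf = p.max(axis=1)
    pseudo = classes[p.argmax(axis=1)]
    keep = conf >= threshold
    return X_unlab[keep], pseudo[keep], conf[keep]
```

Samples whose confidence falls below the threshold are simply discarded, mirroring how only a consistent subsample of recordings is kept for retraining.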
HyperDimensional Computing (HDC) as a machine learning paradigm is highly interesting for applications involving continuous, semi-supervised learning for long-term monitoring. However, its accuracy is not yet on par with other machine learning (ML) approaches. Frameworks enabling fast design space exploration to find practical algorithms are necessary to make HD computing competitive with other ML techniques. To this end, we introduce HDTorch, an open-source, PyTorch-based HDC library with CUDA extensions for hypervector operations. We demonstrate HDTorch's utility by analyzing four HDC benchmark datasets using both classical and online HD training methodologies. We demonstrate average (training)/inference speedups of (111x/68x)/87x for classical/online HD, respectively. Moreover, we analyze the effects of different hyperparameters on runtime and accuracy. Finally, we demonstrate how HDTorch enables exploration of HDC strategies applied to large, real-world datasets. We perform the first-ever HD training and inference analysis of the entirety of the CHB-MIT EEG epilepsy database, showing that the typical approach of training on a subset of the data does not necessarily generalize to the entire dataset, an important factor when developing future HD models for medical wearable devices.
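For readers unfamiliar with the hypervector operations HDTorch accelerates, here is a small numpy sketch of the two core binary-HDC primitives, binding (XOR) and bundling (bitwise majority), with Hamming similarity for comparison; the dimensionality and function names are just conventional choices, not HDTorch's API.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; HDC typically uses thousands of bits

def random_hv():
    """Random binary hypervector; two of these are quasi-orthogonal
    (Hamming similarity close to 0.5)."""
    return rng.integers(0, 2, D, dtype=np.int8)

def bind(a, b):
    """XOR binding: the result is dissimilar to both inputs."""
    return a ^ b

def bundle(hvs):
    """Bitwise majority vote: the result stays similar to each input."""
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.int8)

def hamming_sim(a, b):
    """1.0 = identical, ~0.5 = unrelated."""
    return 1.0 - np.mean(a != b)
```

Classical HD training bundles encoded samples of a class into a prototype vector and classifies by nearest Hamming similarity; these elementwise operations over very wide vectors are exactly what benefits from the library's batched CUDA kernels.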
Objective: Continuous monitoring of biosignals via wearable sensors has rapidly expanded in the medical and wellness fields. At rest, automatic detection of vital parameters is generally accurate. However, in conditions such as high-intensity exercise, sudden physiological changes occur in the signals, compromising the robustness of standard algorithms. Methods: Our method, called BayeSlope, is based on unsupervised learning, Bayesian filtering, and non-linear normalization to enhance and correctly detect R peaks according to their expected positions in the ECG. Furthermore, as BayeSlope is computationally heavy and can quickly drain the device battery, we propose an online design that adapts its robustness to sudden physiological changes, and its complexity to the heterogeneous resources of modern embedded platforms. This method combines BayeSlope with a lightweight algorithm, executed on cores with different capabilities, to reduce energy consumption while preserving accuracy. Results: BayeSlope achieves an F1 score of 99.3% in challenging cases such as intense cycling exercise. Moreover, the online adaptive process achieves an F1 score of 99% across five different exercise intensities, with a total energy consumption of 1.55±0.54 mJ. Conclusion: We propose a highly accurate and robust method, together with a complete energy-efficient implementation on a modern ultra-low-power embedded platform, to improve R peak detection in challenging conditions such as high-intensity exercise. Significance: The experiments show that BayeSlope outperforms a state-of-the-art algorithm by up to 8.4% in F1 score, while our online adaptive method can reach energy savings of up to 38.7% on modern heterogeneous wearable platforms.
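The core idea of weighting R-peak candidates by their expected position can be sketched as a simple posterior scoring: detector evidence multiplied by a Gaussian prior centred on where the next beat is expected (e.g., from the running RR-interval estimate). This is a schematic of the Bayesian-filtering idea only, not BayeSlope's actual implementation; all names and parameters are illustrative.

```python
import numpy as np

def weighted_peak(candidates, scores, expected_pos, sigma):
    """Pick the R-peak candidate maximizing detector score times a
    Gaussian prior on the expected peak location.

    candidates   -- candidate sample indices within the search window
    scores       -- detector evidence (e.g., slope magnitude) per candidate
    expected_pos -- predicted location of the next R peak
    sigma        -- uncertainty of that prediction, in samples
    """
    candidates = np.asarray(candidates, dtype=float)
    scores = np.asarray(scores, dtype=float)
    prior = np.exp(-0.5 * ((candidates - expected_pos) / sigma) ** 2)
    posterior = scores * prior
    return candidates[np.argmax(posterior)], posterior / posterior.sum()
```

During intense exercise, a spurious artifact far from the expected position receives a near-zero prior and is rejected even when its raw detector score is high, which is how position priors add robustness.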
Long-term monitoring of patients with epilepsy presents challenging problems from the engineering perspective of real-time detection and wearable device design. It requires new solutions that allow continuous, unobstructed monitoring and reliable detection and prediction of seizures. There is high variability in electroencephalogram (EEG) patterns among people, brain states, and time instances during seizures, but also during non-seizure periods. This makes epileptic seizure detection very challenging, especially if the data is grouped under only seizure and non-seizure labels. Hyperdimensional (HD) computing, a novel machine learning approach, is a promising tool, but it has certain limitations when the data shows a high degree of variability. Therefore, in this work we propose a novel semi-supervised learning approach based on multi-centroid HD computing. The multi-centroid approach allows several prototype vectors to represent the seizure and non-seizure states, which leads to significantly improved performance compared to a simple two-class HD model. Further, real-life data imbalance poses an additional challenge, and performance reported on balanced subsets of the data is likely to be overestimated. Thus, we test our multi-centroid approach under three different dataset balancing scenarios, showing that the performance improvement is higher for less balanced datasets. More specifically, up to 14% improvement is achieved on an unbalanced test set with 10 times more non-seizure than seizure data. At the same time, the total number of sub-classes does not increase significantly compared to the balanced dataset. Thus, the proposed multi-centroid approach can be an important element in achieving high performance with real-life data balance, or during online learning, where seizures are infrequent.
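The inference side of a multi-centroid model reduces to nearest-prototype classification where several prototypes ("sub-classes") may share the same seizure/non-seizure label. A minimal sketch, with cosine similarity standing in for HD similarity and all names being illustrative rather than the paper's:

```python
import numpy as np

def predict_multicentroid(x, prototypes, labels):
    """Classify x by its most similar prototype vector. Several
    prototypes may carry the same class label, so a highly variable
    class can be covered by multiple centroids."""
    sims = [np.dot(x, p) / (np.linalg.norm(x) * np.linalg.norm(p))
            for p in prototypes]
    return labels[int(np.argmax(sims))]
```

Training-side, new sub-class prototypes are spawned when incoming samples fit no existing prototype well, which is how the approach absorbs the within-class EEG variability that defeats a single two-prototype model.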
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of such approaches to addressing the COD has led us to solving high-dimensional PDEs. This has resulted in opening doors to solving a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, Tensor Neural Networks (TNN) demonstrate that they can provide significant parameter savings while attaining the same accuracy as the classical Dense Neural Network (DNN). In addition, we also show how TNN can be trained faster than DNN for the same accuracy. Besides TNN, we also introduce Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count as compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
In the future, service robots are expected to be able to operate autonomously for long periods of time without human intervention. Many works striving for this goal have emerged with the development of robotics, both in hardware and software. Today we believe that an important underpinning of long-term robot autonomy is the ability of robots to learn on site and on-the-fly, especially when they are deployed in changing environments or need to traverse different environments. In this paper, we examine the problem of long-term autonomy from the perspective of robot learning, especially in an online way, and discuss in tandem its premise "data" and the subsequent "deployment".
Factorization machines (FMs) are a powerful tool for regression and classification in the context of sparse observations, which has been successfully applied to collaborative filtering, especially when side information over users or items is available. Bayesian formulations of FMs have been proposed to provide confidence intervals over the predictions made by the model; however, they usually involve Markov-chain Monte Carlo methods that require many samples to provide accurate predictions, resulting in slow training in the context of large-scale data. In this paper, we propose a variational formulation of factorization machines that allows us to derive a simple objective that can be easily optimized using standard mini-batch stochastic gradient descent, making it amenable to large-scale data. Our algorithm learns an approximate posterior distribution over the user and item parameters, which leads to confidence intervals over the predictions. We show, using several datasets, that it has comparable or better performance than existing methods in terms of prediction accuracy, and provide some applications in active learning strategies, e.g., preference elicitation techniques.
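As background for the model being made variational, the second-order FM score can be computed in O(kn) via Rendle's reformulation of the pairwise interaction term. A small sketch (point estimates only; the paper's contribution is a posterior over these parameters, which this does not show):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine score, using the identity
      sum_{i<j} <v_i, v_j> x_i x_j
        = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ]
    so the pairwise term costs O(k n) instead of O(k n^2).

    x  -- feature vector, shape (n,), typically sparse in practice
    w0 -- global bias;  w -- linear weights, shape (n,)
    V  -- factor matrix, shape (n, k)
    """
    linear = w0 + x @ w
    s = V.T @ x                   # shape (k,): per-factor weighted sums
    s2 = (V ** 2).T @ (x ** 2)    # shape (k,): removes the i == j terms
    pairwise = 0.5 * np.sum(s ** 2 - s2)
    return linear + pairwise
```

In the variational setting, w and V become distributions, and sampling or moment propagation through this same prediction function yields the confidence intervals mentioned above.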
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
This case study investigates the extent to which a language model (GPT-2) is able to capture native speakers' intuitions about implicit causality in a sentence completion task. We first reproduce earlier results (showing lower surprisal values for pronouns that are congruent with either the subject or object, depending on which one corresponds to the implicit causality bias of the verb), and then examine the effects of gender and verb frequency on model performance. Our second study examines the reasoning ability of GPT-2: is the model able to produce more sensible motivations for why the subject VERBed the object if the verbs have stronger causality biases? We also developed a methodology to avoid human raters being biased by obscenities and disfluencies generated by the model.
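The surprisal values compared across congruent and incongruent pronouns are simply negative log probabilities under the model's next-token distribution. A self-contained sketch of that computation from raw logits (toy numbers, not GPT-2 output):

```python
import math

def surprisal_bits(logits, token_id):
    """Surprisal -log2 p(token) from raw next-token logits, with the
    softmax normalizer computed in a numerically stable way."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return (log_z - logits[token_id]) / math.log(2)
```

A pronoun congruent with the verb's implicit-causality bias should receive a higher probability, and hence a lower surprisal, than an incongruent one.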
Semi-supervised anomaly detection is a common problem, as often the datasets containing anomalies are partially labeled. We propose a canonical framework: Semi-supervised Pseudo-labeler Anomaly Detection with Ensembling (SPADE) that is not limited by the assumption that labeled and unlabeled data come from the same distribution. Indeed, this assumption is often violated in practice - for example, the labeled data may contain only anomalies unlike the unlabeled data, the unlabeled data may contain different types of anomalies, or the labeled data may contain only 'easy-to-label' samples. SPADE utilizes an ensemble of one-class classifiers as the pseudo-labeler to improve the robustness of pseudo-labeling under distribution mismatch. Partial matching is proposed to automatically select the critical hyper-parameters for pseudo-labeling without validation data, which is crucial with limited labeled data. SPADE shows state-of-the-art semi-supervised anomaly detection performance across a wide range of scenarios with distribution mismatch in both tabular and image domains. In some common real-world settings, such as a model facing new types of unlabeled anomalies, SPADE outperforms the state-of-the-art alternatives by 5% AUC on average.
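The agreement idea behind an ensemble pseudo-labeler can be sketched as follows, with the one-class classifiers heavily simplified to centroid-distance detectors on bootstrap samples; this illustrates the vote-and-abstain mechanism only, not SPADE's actual OCCs or its partial-matching hyper-parameter selection.

```python
import numpy as np

def ensemble_pseudo_label(X_normal, X_unlab, n_models=5,
                          quantile=0.95, seed=0):
    """Pseudo-label unlabeled points by ensemble agreement: a point is
    labeled anomalous (1) only if every member flags it, normal (0)
    only if no member does, and left unlabeled (-1) otherwise."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(X_unlab), dtype=int)
    for _ in range(n_models):
        idx = rng.integers(0, len(X_normal), len(X_normal))
        boot = X_normal[idx]                       # bootstrap resample
        c = boot.mean(axis=0)                      # toy one-class model:
        thr = np.quantile(np.linalg.norm(boot - c, axis=1), quantile)
        votes += (np.linalg.norm(X_unlab - c, axis=1) > thr).astype(int)
    pseudo = np.full(len(X_unlab), -1)
    pseudo[votes == n_models] = 1
    pseudo[votes == 0] = 0
    return pseudo
```

Abstaining on disagreements is what makes the pseudo-labels robust when the labeled and unlabeled distributions differ: only points the whole ensemble agrees on feed the downstream detector.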